Bayesian Generic Priors for Causal Learning
Authors
Abstract
We present a Bayesian model of causal learning that incorporates generic priors on distributions of weights representing potential powers to either produce or prevent an effect. These generic priors favor necessary and sufficient causes. The NS power model couples these priors with a causal generating function derived from the power PC theory (Cheng, 1997). We test this model and alternative Bayesian models using the strategy of computational cognitive psychophysics, fitting multiple data sets in which several parameters are varied parametrically across multiple types of judgments. The NS power model accounts for a wide range of data concerning judgments of both causal strength (the power of a cause to produce or prevent an effect) and causal structure (whether or not a causal link exists). For both types of causal judgments, a generic prior favoring a cause that is jointly necessary and sufficient explains interactions involving causal direction (generative versus preventive causes). For structure judgments, an additional prior that a new candidate cause will be deterministic (i.e., sufficient or else ineffective) explains why such judgments are based primarily on causal power and the base rate of the effect, rather than on sample size. Alternative Bayesian formulations that lack either causal power assumptions or generic priors for necessity and sufficiency proved inadequate. Broader implications of the Bayesian framework for human learning are discussed.
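As a concrete illustration of the model class described in the abstract, the Python sketch below pairs the power PC generating functions (noisy-OR for a generative cause, noisy-AND-NOT for a preventive cause; Cheng, 1997) with a simple prior that favors a cause that is necessary (background weight w0 near 0) and sufficient (causal weight w1 near 1), and scores causal strength on a grid. The exponential prior form, the parameter alpha, and the grid approximation are illustrative assumptions, not the paper's exact specification.

import numpy as np

def log_likelihood(w0, w1, data, generative=True):
    """Log-likelihood under the power PC generating function (Cheng, 1997):
    generative cause:  P(e=1 | c) = w0 + w1*c - w0*w1*c   (noisy-OR)
    preventive cause:  P(e=1 | c) = w0 * (1 - w1*c)       (noisy-AND-NOT)
    data = counts (effect & cause, no effect & cause,
                   effect & no cause, no effect & no cause)."""
    def p_effect(c):
        if generative:
            return w0 + w1 * c - w0 * w1 * c
        return w0 * (1.0 - w1 * c)
    p1, p0 = p_effect(1), p_effect(0)
    n_e1c1, n_e0c1, n_e1c0, n_e0c0 = data
    eps = 1e-12  # guard against log(0) at the grid edges
    return (n_e1c1 * np.log(p1 + eps) + n_e0c1 * np.log(1 - p1 + eps)
            + n_e1c0 * np.log(p0 + eps) + n_e0c0 * np.log(1 - p0 + eps))

def ns_log_prior(w0, w1, alpha=5.0):
    """Illustrative 'necessary and sufficient' prior: prefer a weak background
    cause (w0 near 0, so the candidate is necessary for the effect) and a
    strong candidate cause (w1 near 1, so the candidate is sufficient)."""
    return -alpha * w0 - alpha * (1.0 - w1)

def posterior_over_strength(data, generative=True, grid=101):
    """Grid approximation to the joint posterior over (w0, w1)."""
    w = np.linspace(0.0, 1.0, grid)
    W0, W1 = np.meshgrid(w, w, indexing="ij")
    log_post = log_likelihood(W0, W1, data, generative) + ns_log_prior(W0, W1)
    post = np.exp(log_post - log_post.max())
    return w, post / post.sum()

# Example: the effect occurs on 8 of 10 cause-present trials and 2 of 10 cause-absent trials.
w, post = posterior_over_strength((8, 2, 2, 8), generative=True)
print("posterior mean causal strength w1:", float((post.sum(axis=0) * w).sum()))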
Similar Articles
Bayesian generic priors for causal learning.
The article presents a Bayesian model of causal learning that incorporates generic priors--systematic assumptions about abstract properties of a system of cause-effect relations. The proposed generic priors for causal learning favor sparse and strong (SS) causes--causes that are few in number and high in their individual powers to produce or prevent effects. The SS power model couples these gen...
Modeling Causal Learning Using Bayesian Generic Priors on Generative and Preventive Powers
We present a Bayesian model of causal learning that incorporates generic priors on distributions of weights representing potential powers to either produce or prevent an effect. These generic priors favor necessary and sufficient causes. Across three experiments, the model explains the systematic pattern of human judgments observed for questions regarding support for a causal link, for both gen...
Generic Priors Yield Competition Between Independently-Occurring Causes
Recent work on causal learning has investigated the possible role of generic priors in guiding human judgments of causal strength. One proposal has been that people have a preference for causes that are sparse and strong—i.e., few in number and individually strong (Lu et al., 2008). Evidence for the use of sparse-and-strong priors has been obtained using a maximally simple causal set-up (a sing...
Generic Priors Yield Competition Between Independently-Occurring Preventive Causes
Recent work on causal learning has investigated the possible role of generic priors in guiding human judgments of causal strength. One proposal has been that people have a preference for causes that are sparse and strong—i.e., few in number and individually strong (Lu et al., 2008). Sparse-and-strong priors predict that competition can be observed between candidate causes of the same polarity (...
Estimating human priors on causal strength
Bayesian models of human causal induction rely on assumptions about people’s priors that have not been extensively tested. We empirically estimated human priors on the strength of causal relationships using iterated learning, an experimental method where people make inferences from data generated based on their own responses in previous trials. This method produced a prior on causal strength th...
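As a minimal illustration of why iterated learning can reveal a prior (an illustrative simulation with assumed parameters, not the design of the cited study): if each simulated Bayesian learner infers a causal strength from data generated by the previous learner's estimate and responds by sampling from its posterior, the chain is a Gibbs sampler whose stationary distribution over strength is the learner's prior, so the responses accumulated across generations trace out that prior.

import numpy as np

rng = np.random.default_rng(0)

# Illustrative prior on causal strength: Beta(a, b); these values are assumptions.
a, b = 2.0, 5.0
n_trials = 20          # observations shown to each simulated learner
n_generations = 20000  # length of the iterated-learning chain

strength = rng.beta(a, b)   # strength used to generate data for the first learner
chain = []
for _ in range(n_generations):
    # Data generated from the previous learner's inferred strength.
    k = rng.binomial(n_trials, strength)
    # A Bayesian learner with a conjugate Beta(a, b) prior has posterior
    # Beta(a + k, b + n_trials - k); sampling the response from it makes the
    # chain's stationary distribution over strength equal to the prior.
    strength = rng.beta(a + k, b + n_trials - k)
    chain.append(strength)

chain = np.array(chain[1000:])  # discard burn-in
print("empirical mean of the chain:", chain.mean())
print("prior mean a / (a + b):     ", a / (a + b))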